10 research outputs found

    New methods for the characterization of essential distributions

    The detailed analysis of polymeric materials is a necessary step in elucidating the relationship between the chemical distributions of a polymer and the functional properties of a material. Within the UNMATCHED project (UNderstanding MATerials by CHaracterizing Essential Distributions), many techniques have been investigated, or further developed, to aid in the analysis of such materials. The primary focus of this thesis was on using liquid chromatography (LC) in innovative ways to analyze the chemical distributions of polymers. Additionally, chemometric strategies that help improve the interpretability of the data obtained from these methods were investigated and documented in later chapters.

    Critical comparison of background correction algorithms used in chromatography

    The objective of the present work was to make a quantitative and critical comparison of a number of drift- and noise-removal algorithms that had proven useful to other researchers but had never been compared on an equal basis. To make a rigorous and fair comparison, a data-generation tool was developed in this work, which utilizes a library of experimental backgrounds, as well as peak shapes obtained from curve fitting on experimental data. Several different distribution functions are used, such as the log-normal, bi-Gaussian, exponentially convoluted Gaussian, exponentially modified Gaussian and modified Pearson VII distributions. The tool was used to create a set of hybrid (part experimental, part simulated) data, in which the background and all peak profiles and areas are known. This large data set (500 chromatograms) was analysed using seven different drift-correction and five different noise-removal algorithms (35 combinations). Root-mean-square errors and absolute errors in peak area were determined, and it was shown that in most cases the combination of sparsity-assisted signal smoothing and asymmetrically reweighted penalized least-squares resulted in the smallest errors for relatively low-noise signals. However, for noisier signals the combination of sparsity-assisted signal smoothing and a local-minimum-value approach to background correction resulted in lower absolute errors in peak area. The performance of the correction algorithms was studied as a function of the density and coverage of peaks in the chromatogram, the shape of the background signal, and the noise level. The developed data-generation tool is published along with this article, so as to allow similar studies with other simulated data sets and possibly other algorithms. The rigorous assessment of correction algorithms in this work may facilitate further automation of data-analysis workflows.
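    As an illustration of one of the drift-correction methods named in this abstract, the sketch below implements asymmetrically reweighted penalized least-squares (arPLS) baseline estimation, following the published algorithm of Baek et al. (2015). This is not the comparison code from the article, and the smoothness parameter `lam` and the convergence settings are illustrative assumptions only.

    ```python
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import spsolve

    def arpls(y, lam=1e5, ratio=1e-6, max_iter=50):
        """Estimate a chromatographic baseline with arPLS.

        Points above the current baseline (peaks) receive near-zero weight,
        while points at or below it keep full weight, so the smoothed fit
        settles onto the drifting background rather than the peaks.
        """
        n = len(y)
        # Second-difference operator: penalizes curvature of the baseline.
        D = sparse.diags([1, -2, 1], [0, 1, 2], shape=(n - 2, n))
        H = lam * (D.T @ D)
        w = np.ones(n)
        z = y.copy()
        for _ in range(max_iter):
            W = sparse.diags(w)
            # Weighted penalized least-squares smooth of the signal.
            z = spsolve((W + H).tocsc(), w * y)
            d = y - z
            dn = d[d < 0]                 # residuals below the baseline
            if dn.size == 0:
                break
            m, s = dn.mean(), dn.std()
            if s == 0:
                break
            # Logistic reweighting: downweight points far above the baseline.
            expo = np.clip(2.0 * (d - (2.0 * s - m)) / s, -500, 500)
            w_new = 1.0 / (1.0 + np.exp(expo))
            if np.linalg.norm(w - w_new) / np.linalg.norm(w) < ratio:
                break
            w = w_new
        return z
    ```

    Subtracting the returned baseline from the raw chromatogram leaves the peaks on an approximately flat background, which is the precondition for accurate peak-area determination discussed in the article.
    
    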

    Recent applications of chemometrics in one- and two-dimensional chromatography

    The proliferation of increasingly sophisticated analytical separation systems, often incorporating ever more powerful detection techniques, such as high-resolution mass spectrometry, creates an urgent need for highly efficient data-analysis and optimization strategies. This is especially true for comprehensive two-dimensional chromatography applied to the separation of very complex samples. In this contribution, the need for chemometric tools is explained, and the latest developments in approaches for (pre-)processing and analyzing data arising from one- and two-dimensional chromatography systems are reviewed. The final part of this review focuses on the application of chemometrics for method development and optimization.